23  Introduction to Parametric Tests

Parametric tests are a category of statistical tests that make assumptions about the underlying distribution of the data. These tests are based on assumptions about population parameters (such as the mean and variance) and require that the data follow a particular distribution, typically the normal distribution. Parametric tests are powerful tools in hypothesis testing and are widely used when their conditions of use are met, as they can provide more accurate and reliable results than non-parametric tests under the right circumstances.

23.0.1 Key Assumptions of Parametric Tests

  1. Normality: The data should be normally distributed. This assumption can be checked using formal tests, such as the Shapiro-Wilk test or the Kolmogorov-Smirnov test, and visually through Q-Q plots (see the sketch after this list).

  2. Homogeneity of Variances: For tests comparing two or more groups, the variances within these groups should be approximately equal. This can be assessed using Levene’s test or Bartlett’s test (also illustrated in the sketch after this list).

  3. Interval or Ratio Scale: The data should be measured on an interval or ratio scale, so that the intervals between adjacent units of measurement are equal; a ratio scale additionally has a meaningful zero point.

  4. Independence: Observations must be independent of each other, meaning the data collected from one participant or observation does not influence the data from another.
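
The following sketch shows one way the normality and equal-variance assumptions (items 1 and 2) might be checked in Python with SciPy; the two simulated groups and all numeric values are placeholders chosen purely for illustration.

    # A minimal sketch of checking normality and homogeneity of variances
    # with SciPy; the data are simulated, not from a real study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    group_a = rng.normal(loc=50, scale=10, size=30)   # placeholder measurements
    group_b = rng.normal(loc=55, scale=10, size=30)

    # 1. Normality: Shapiro-Wilk test for each group (H0: data are normal).
    for name, data in [("A", group_a), ("B", group_b)]:
        stat, p = stats.shapiro(data)
        print(f"Shapiro-Wilk group {name}: W = {stat:.3f}, p = {p:.3f}")

    # 2. Homogeneity of variances: Levene's and Bartlett's tests
    #    (H0: the group variances are equal).
    stat, p = stats.levene(group_a, group_b)
    print(f"Levene:   W = {stat:.3f}, p = {p:.3f}")
    stat, p = stats.bartlett(group_a, group_b)
    print(f"Bartlett: T = {stat:.3f}, p = {p:.3f}")

    # A Q-Q plot gives a visual check of normality (requires matplotlib).
    # import matplotlib.pyplot as plt
    # stats.probplot(group_a, dist="norm", plot=plt)
    # plt.show()

A small p-value in the Shapiro-Wilk, Levene, or Bartlett test suggests the corresponding assumption may be violated for these data.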

23.0.2 Common Parametric Tests

  • t-test: Used to compare means to see if they are significantly different. The one-sample t-test compares a sample mean against a known value, while the independent samples and paired samples t-tests compare the means of two groups (see the sketch after this list).

  • ANOVA (Analysis of Variance): Used to compare the means among three or more groups to see if at least one group mean is significantly different. It includes one-way ANOVA for one independent variable and two-way ANOVA for two independent variables.

  • Linear Regression: Examines the linear relationship between two variables, estimating how a dependent variable changes as an independent variable changes.

  • Pearson Correlation Coefficient: Measures the strength and direction of the linear relationship between two continuous variables.
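
The sketch below runs each of these tests in Python with SciPy; the arrays are simulated placeholders, and the specific means, slopes, and sample sizes are assumptions made only so the example is self-contained.

    # A minimal sketch of the common parametric tests using SciPy,
    # applied to simulated placeholder data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(10, 2, size=40)                   # continuous predictor
    y = 3.0 + 0.5 * x + rng.normal(0, 1, size=40)    # dependent variable
    group1 = rng.normal(5.0, 1.0, size=25)
    group2 = rng.normal(5.5, 1.0, size=25)
    group3 = rng.normal(6.0, 1.0, size=25)

    # Independent samples t-test: do two group means differ?
    t_stat, p_ttest = stats.ttest_ind(group1, group2)

    # Paired samples t-test: same subjects measured twice (equal-length arrays).
    t_paired, p_paired = stats.ttest_rel(group1, group2)

    # One-way ANOVA: do three or more group means differ?
    f_stat, p_anova = stats.f_oneway(group1, group2, group3)

    # Simple linear regression: how y changes as x changes.
    reg = stats.linregress(x, y)

    # Pearson correlation: strength and direction of the linear relationship.
    r, p_corr = stats.pearsonr(x, y)

    print(f"t-test p = {p_ttest:.3f}, ANOVA p = {p_anova:.3f}")
    print(f"regression slope = {reg.slope:.2f}, Pearson r = {r:.2f}")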

23.0.3 Advantages of Parametric Tests

  • Efficiency: When the assumptions are met, parametric tests are more efficient than their non-parametric counterparts, requiring smaller sample sizes to achieve the same level of statistical power.

  • Greater Statistical Power: They have more power to detect true effects or differences when the assumptions hold, making them preferable in many situations.

23.0.4 Limitations

  • Assumptions: The strict assumptions about the data distribution can be a limitation. If these assumptions are violated, the results of parametric tests may not be valid.

  • Robustness: Some parametric tests are not robust to outliers; a few extreme values can substantially distort the results, as the sketch below illustrates.
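
The following small sketch, using made-up numbers, illustrates how a single extreme value can change the outcome of an independent samples t-test.

    # Sensitivity of the t-test to a single outlier (illustrative values only).
    import numpy as np
    from scipy import stats

    control   = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.1])
    treatment = np.array([5.4, 5.6, 5.5, 5.3, 5.7, 5.5, 5.4, 5.6])

    t1, p1 = stats.ttest_ind(control, treatment)

    # Add one extreme value to the treatment group and re-run the test.
    treatment_outlier = np.append(treatment, 25.0)
    t2, p2 = stats.ttest_ind(control, treatment_outlier)

    print(f"without outlier: p = {p1:.4f}")
    print(f"with outlier:    p = {p2:.4f}")  # the inflated variance can mask a real difference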